Privacy-preserving solutions enable companies to offload confidential data to third-party services while fulfilling government regulations. To accomplish this, they leverage various cryptographic techniques such as homomorphic encryption (HE), which allows performing computations on encrypted data. Most HE schemes work in a SIMD fashion, and the data packing method can dramatically affect the running time and memory costs. Finding a packing method that leads to an optimally performing implementation is a hard task. We present a simple and intuitive framework that abstracts the packing decision away from the user. We explain its underlying data structures and optimizer, and propose a novel algorithm for performing 2D convolution operations. We use this framework to implement an HE-friendly version of AlexNet that runs in three minutes, several orders of magnitude faster than other state-of-the-art solutions that use only HE.
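To make the packing point above concrete, here is a minimal, purely conceptual Python sketch (not the paper's framework and not the API of any real HE library): a ciphertext holds a fixed number of slots, one homomorphic operation acts on all slots at once, and the chosen layout of plaintext data across those slots determines whether later operations line up or require extra rotations and masks. The helper names below are hypothetical.

```python
# Conceptual sketch of SIMD packing for HE. Not a real HE library API;
# plain Python lists stand in for ciphertext slot vectors.

NUM_SLOTS = 8  # toy slot count; real schemes have thousands of slots


def pack_row_major(matrix):
    """Flatten the matrix row by row into one slot vector, zero-padded."""
    flat = [x for row in matrix for x in row]
    return flat + [0] * (NUM_SLOTS - len(flat))


def pack_col_major(matrix):
    """Flatten the matrix column by column into one slot vector."""
    flat = [x for col in zip(*matrix) for x in col]
    return flat + [0] * (NUM_SLOTS - len(flat))


def simd_add(ct_a, ct_b):
    """A single homomorphic addition acts on every slot simultaneously."""
    return [a + b for a, b in zip(ct_a, ct_b)]


m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[10, 20, 30], [40, 50, 60]]

# Matching layouts: one SIMD addition adds the whole matrices at once.
print(simd_add(pack_row_major(m1), pack_row_major(m2)))

# Mismatched layouts: the slot-wise result is meaningless, so subsequent
# operations would need extra rotations/masks -- the kind of cost a
# packing optimizer tries to avoid by choosing the layout automatically.
print(simd_add(pack_row_major(m1), pack_col_major(m2)))
```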
In recent years, denoising diffusion models have demonstrated outstanding image generation performance. The information on natural images captured by these models is useful for many image reconstruction applications, where the task is to restore a clean image from its degraded observations. In this work, we propose a conditional sampling scheme that exploits the prior learned by diffusion models while retaining agreement with the observations. We then combine it with a novel approach for adapting pretrained diffusion denoising networks to their input. We examine two adaptation strategies: the first uses only the degraded image, while the second, which we advocate, is performed using images that are ``nearest neighbors'' of the degraded image, retrieved from a diverse dataset using an off-the-shelf visual-language model. To evaluate our method, we test it on two state-of-the-art publicly available diffusion models, Stable Diffusion and Guided Diffusion. We show that our proposed `adaptive diffusion for image reconstruction' (ADIR) approach achieves a significant improvement in the super-resolution, deblurring, and text-based editing tasks.
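The retrieval step described above can be illustrated with a generic embedding-based nearest-neighbor search. The sketch below is only an assumption-laden stand-in for the paper's procedure: `embed` would be some off-the-shelf visual-language image encoder producing fixed-size vectors, and random vectors are used here in its place.

```python
import numpy as np

# Generic sketch of nearest-neighbor retrieval by cosine similarity.
# Embeddings of the degraded image and an external dataset would come from
# a visual-language encoder; random vectors stand in for them here.

def cosine_similarity(query_vec, database_vecs):
    """Cosine similarity between one query embedding and a stack of embeddings."""
    q = query_vec / np.linalg.norm(query_vec)
    d = database_vecs / np.linalg.norm(database_vecs, axis=1, keepdims=True)
    return d @ q


def retrieve_nearest_neighbors(query_embedding, dataset_embeddings, k=5):
    """Return indices of the k dataset images closest to the query embedding."""
    sims = cosine_similarity(query_embedding, dataset_embeddings)
    return np.argsort(-sims)[:k]


rng = np.random.default_rng(0)
query = rng.normal(size=512)            # embedding of the degraded image
dataset = rng.normal(size=(1000, 512))  # embeddings of a diverse external dataset
neighbor_ids = retrieve_nearest_neighbors(query, dataset, k=5)
# The retrieved images would then be used to briefly adapt (fine-tune) the
# pretrained denoising network before running the conditional sampling scheme.
print(neighbor_ids)
```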
In this paper, we introduce a novel neural network training framework that increases model robustness to adversarial attacks while maintaining high clean accuracy, by combining contrastive learning (CL) with adversarial training (AT). We propose to improve model robustness to adversarial attacks by learning feature representations that remain consistent under both data augmentations and adversarial perturbations. We leverage contrastive learning to improve adversarial robustness by treating an adversarial example as an additional positive example, and aim to maximize the similarity between random augmentations of data samples and their adversarial examples, while continually updating the classification head to avoid a cognitive dissociation between the classification head and the embedding space. This dissociation arises because CL updates the network only up to the embedding space, while freezing the classification head that is used to generate new positive adversarial examples. We validate our method, Contrastive Learning with Adversarial Features (CLAF), on the CIFAR-10 dataset, where it outperforms both alternative supervised and self-supervised adversarial learning methods in robust accuracy and clean accuracy.
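A minimal PyTorch-style sketch of the core idea, assuming an InfoNCE-style formulation in which the adversarial example serves as an extra positive view of the same sample; this is an illustrative stand-in, not the paper's exact CLAF objective, and it omits the classification-head update discussed above.

```python
import torch
import torch.nn.functional as F

# Sketch: treat the adversarial example as a positive view and maximize its
# agreement with a random augmentation of the same sample (symmetrized
# InfoNCE), pushing other samples in the batch apart.

def contrastive_adversarial_loss(z_aug, z_adv, temperature=0.5):
    """z_aug, z_adv: (batch, dim) embeddings of augmented and adversarial views."""
    z_aug = F.normalize(z_aug, dim=1)
    z_adv = F.normalize(z_adv, dim=1)
    logits = z_aug @ z_adv.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(z_aug.size(0))      # positives lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Toy usage with random embeddings standing in for encoder outputs.
z_aug = torch.randn(8, 128)
z_adv = torch.randn(8, 128)
print(contrastive_adversarial_loss(z_aug, z_adv))
```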